
    Service Deployment Model on Shared Virtual Network Functions With Flow Partition

    Network operators can operate services flexibly with virtual network functions thanks to network function virtualization technology. Flow partition allows aggregated traffic to be split into multiple parts, which increases this flexibility. This paper proposes a service deployment model with flow partition that minimizes the service deployment cost while meeting service delay requirements. A virtual network function of a service is allowed to have several instances, each of which hosts a part of the flows and can be shared among different services, to reduce the initial and proportional costs. We provide the mathematical formulation of the proposed model and transform a special case of it into a mixed-integer second-order cone programming (MISOCP) problem. A heuristic algorithm, called the flow partition heuristic (FPH), is introduced to solve the original problem in practical time by decomposing it into several steps, each of which handles a convex problem. We compare the performance of the proposed model with flow partition against that of a conventional model without flow partition. We also consider the formulated MISOCP problem in a special case with a strategy of even splitting to divide flows, which we call the even-splitting heuristic (ESH), and compare the performances of FPH and ESH in a realistic scenario. We further consider the formulated MISOCP problem as the original problem and compare it to an FPH-based heuristic algorithm with the even-splitting strategy (FPH-ES), in both realistic and synthetic scenarios. The numerical results reveal that the proposed model saves service deployment cost compared to the conventional one and improves the maximum admissible traffic scale by 23% on average in our examined cases. We observe that FPH outperforms ESH, and ESH outperforms FPH-ES, in terms of service deployment cost on their respective target problems.
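    As a rough illustration of the even-splitting idea described above, the sketch below divides a service's aggregated flow evenly over k instances and picks the cheapest feasible k. The instance capacity, cost parameters, and the toy M/M/1-style delay check are assumptions for illustration only; they stand in for, but do not reproduce, the paper's MISOCP formulation.

```python
# Illustrative even-splitting sketch: the aggregated flow of a service is split
# evenly across k VNF instances; deployment cost = k * initial cost + total
# flow * proportional cost. The capacity and delay model are placeholders.

def even_split_cost(total_flow, k, initial_cost, unit_cost):
    return k * initial_cost + total_flow * unit_cost

def best_even_split(total_flow, max_instances, initial_cost, unit_cost,
                    capacity, delay_req):
    """Return (k, cost) for the cheapest feasible even split, or None."""
    best = None
    for k in range(1, max_instances + 1):
        load = total_flow / k                      # flow handled per instance
        if load >= capacity:
            continue                               # instance overloaded
        if 1.0 / (capacity - load) > delay_req:    # toy M/M/1-style delay check
            continue
        cost = even_split_cost(total_flow, k, initial_cost, unit_cost)
        if best is None or cost < best[1]:
            best = (k, cost)
    return best

if __name__ == "__main__":
    # Splitting the flow lets each instance run at a lower load, which makes
    # the delay requirement attainable without adding capacity.
    print(best_even_split(total_flow=120.0, max_instances=8, initial_cost=10.0,
                          unit_cost=0.5, capacity=50.0, delay_req=0.1))
```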

    Delay Distribution Based Remote Data Fetch Scheme for Hadoop Clusters in Public Cloud

    Apache Hadoop and its ecosystem have become the de facto platform for processing large-scale data, or Big Data, because they hide the complexity of distributed computing, scheduling, and communication while providing fault tolerance. Cloud-based environments are becoming a popular platform for hosting Hadoop clusters due to their low initial cost and virtually limitless capacity. However, cloud-based Hadoop clusters bring their own challenges because of contradictory design principles: Hadoop is designed on the shared-nothing principle, while the cloud is based on consolidation and resource sharing. Most of Hadoop's features are designed for on-premises data centers where the cluster topology is known. Hadoop depends on the rack assignment of servers (configured by the cluster administrator) to calculate the distance between servers, which it uses to find the best remote server to fetch non-local data from. However, public cloud providers do not share the rack information of virtual servers with their tenants. Without rack information, Hadoop may fetch data from a remote server on the other side of the data center. To overcome this problem, we propose a delay-distribution-based scheme for finding the closest server from which to fetch non-local data in public cloud-based Hadoop clusters. The proposed scheme bases server selection on the delay distributions between server pairs, calculated by periodically measuring the round-trip time between servers. Our experiments show that the proposed scheme outperforms conventional Hadoop by nearly 12% in terms of non-local data fetch time. This reduction in data fetch time leads to a reduction in job run time, especially in real-world multi-user clusters where non-local data fetching happens frequently.
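    The server-selection rule can be sketched as follows, assuming RTT samples between server pairs are already collected periodically. The percentile-based ranking and all names are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal sketch: periodically collected round-trip-time samples form a
# per-server delay distribution, and the remote replica is fetched from the
# server whose distribution looks fastest. A high quantile is used here so
# occasionally slow servers are penalised (an assumption for illustration).
import random
import statistics

class DelayTracker:
    def __init__(self):
        self.samples = {}  # server -> list of RTT samples (seconds)

    def record_rtt(self, server, rtt):
        self.samples.setdefault(server, []).append(rtt)

    def estimate(self, server):
        s = sorted(self.samples[server])
        # 90th percentile once enough samples exist, otherwise the mean.
        return statistics.quantiles(s, n=10)[8] if len(s) >= 10 else statistics.mean(s)

    def pick_server(self, replica_servers):
        """Choose the replica holder with the smallest estimated delay."""
        return min(replica_servers, key=self.estimate)

if __name__ == "__main__":
    tracker = DelayTracker()
    for server, base in [("vm-a", 0.4), ("vm-b", 1.2), ("vm-c", 0.7)]:
        for _ in range(50):
            tracker.record_rtt(server, abs(random.gauss(base, 0.1)))
    print(tracker.pick_server(["vm-a", "vm-b", "vm-c"]))
```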

    Self-adjustable domain adaptation in personalized ECG monitoring integrated with IR-UWB radar

    To enhance electrocardiogram (ECG) monitoring systems for personalized detection, deep neural networks (DNNs) are applied to overcome individual differences through periodic retraining. As introduced previously [4], DNNs relieve individual differences by fusing ECG with impulse radio ultra-wideband (IR-UWB) radar. However, such a DNN-based ECG monitoring system tends to overfit small personal datasets and is difficult to generalize to newly collected unlabeled data. This paper proposes a self-adjustable domain adaptation (SADA) strategy to prevent overfitting and exploit unlabeled data. Firstly, this paper enlarges the database of ECG and radar data with records acquired from 28 testers and expanded by data augmentation. Secondly, to utilize unlabeled data, SADA combines self-organizing maps with transfer learning to predict labels. Thirdly, SADA integrates one-class classification with domain adaptation algorithms to reduce overfitting. Based on our enlarged database and standard databases, a large dataset of 73,200 records and a small one of 1,849 records are built to verify our proposal. Results show SADA's effectiveness in predicting labels and an increase in the sensitivity of DNNs by 14.4% compared with existing domain adaptation algorithms.
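    The self-organizing-map ingredient of SADA can be sketched roughly as below: a small SOM is fitted to labeled records, its nodes are labeled by majority vote, and unlabeled records receive the label of their best-matching unit. The map size, learning schedule, and voting rule are illustrative assumptions; the transfer-learning, one-class-classification, and domain-adaptation parts of SADA are not reproduced here.

```python
# A compact SOM used only for pseudo-labeling unlabeled records (illustration).
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0, seed=0):
    """Fit a small self-organizing map to `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(grid[0], grid[1], data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1).astype(float)
    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)      # linearly decaying learning rate
        for x in rng.permutation(data):
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                                   grid)
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]  # neighborhood
            weights += rate * h * (x - weights)
    return weights

def best_matching_unit(weights, x):
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)),
                            weights.shape[:2])

def pseudo_label(weights, labeled_x, labeled_y, unlabeled_x):
    """Label SOM nodes by majority vote of labeled data, then label new data."""
    votes = {}
    for x, y in zip(labeled_x, labeled_y):
        votes.setdefault(best_matching_unit(weights, x), []).append(y)
    node_label = {n: max(set(v), key=v.count) for n, v in votes.items()}
    fallback = max(set(list(labeled_y)), key=list(labeled_y).count)
    return [node_label.get(best_matching_unit(weights, x), fallback)
            for x in unlabeled_x]
```

    Given labeled source records and unlabeled target records as NumPy arrays, `pseudo_label(train_som(labeled_x), labeled_x, labeled_y, unlabeled_x)` returns one pseudo-label per unlabeled record.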

    Virtual Network Function Placement for Service Chaining by Relaxing Visit Order and Non-Loop Constraints

    Network Function Virtualization (NFV) is a paradigm that virtualizes traditional network functions and instantiates Virtual Network Functions (VNFs) as software instances separate from hardware appliances. Service Chaining (SC), seen as one of the major NFV use cases, provides customized services to users by concatenating VNFs. A VNF placement model for SC that relaxes the visit order constraints of requested VNFs has been considered. Relaxing the VNF visit order constraints reduces the number of VNFs that need to be placed in the network. However, since the model does not permit any loop within an SC path, the efficiency of computation resource utilization deteriorates in some topologies. This paper proposes a VNF placement model for SC that minimizes the cost of placing VNFs and utilizing link capacity while allowing both relaxation of the VNF visit order constraints and configuration of SC paths that include loops. The proposed model determines the routes of requested SC paths, which can have loops, by introducing a logical layered network generated from the original physical network. This model is formulated as an Integer Linear Programming (ILP) problem. A heuristic algorithm is introduced for cases where the ILP problem is not tractable. Simulation results show that the proposed model provides SC paths at a smaller cost compared to the conventional model.
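    The layered-network construction that allows loops can be sketched roughly as follows. Here the function placement, edge weights, and the networkx-based shortest-path search are illustrative assumptions, the chain order is kept fixed (the relaxation of the visit order is not shown), and the paper's ILP optimizes placement and routing jointly rather than routing over a given placement.

```python
# Layer i of the logical graph means "i functions of the chain already applied".
# Copies of each physical link connect nodes within a layer, and a zero-cost
# edge drops to the next layer at a node hosting the next required function.
# A shortest path from (src, 0) to (dst, len(chain)) maps back to a physical
# route that may revisit links, i.e. contain loops.
import networkx as nx

def layered_route(phys, src, dst, chain, placement):
    """phys: undirected weighted graph; chain: ordered list of functions;
    placement: dict function -> set of nodes hosting it (assumed given)."""
    L = nx.DiGraph()
    layers = len(chain)
    for layer in range(layers + 1):
        for u, v, data in phys.edges(data=True):
            w = data.get("weight", 1)
            L.add_edge((u, layer), (v, layer), weight=w)
            L.add_edge((v, layer), (u, layer), weight=w)
    for layer, func in enumerate(chain):
        for node in placement.get(func, ()):
            L.add_edge((node, layer), (node, layer + 1), weight=0)
    path = nx.shortest_path(L, (src, 0), (dst, layers), weight="weight")
    route = [node for node, _ in path]
    # Drop consecutive duplicates left by the zero-cost layer transitions.
    return [n for i, n in enumerate(route) if i == 0 or n != route[i - 1]]

if __name__ == "__main__":
    g = nx.path_graph(4)  # 0-1-2-3
    # "fw" only at node 2 and "nat" only at node 1 forces a loop over link 1-2.
    print(layered_route(g, src=0, dst=3, chain=["fw", "nat"],
                        placement={"fw": {2}, "nat": {1}}))
```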

    Data-Importance-Aware Bandwidth-Allocation Scheme for Point-Cloud Transmission in Multiple LIDAR Sensors

    This paper addresses bandwidth allocation among multiple light detection and ranging (LIDAR) sensors for smart monitoring, in which only limited communication capacity is available to transmit a large volume of point-cloud data from the sensors to an edge server in real time. To deal with the limited capacity of the communication channel, we propose a bandwidth-allocation scheme that assigns one of multiple point-cloud compression formats to each LIDAR sensor according to the spatial importance of the point-cloud data it transmits. Spatial importance is determined by estimating how likely objects such as cars, trucks, bikes, and pedestrians are to exist, since regions where objects are more likely to exist are more useful for smart monitoring. A numerical study using a real point-cloud dataset obtained at an intersection indicates that the proposed scheme is superior to the benchmarks in terms of the distribution of data volumes among LIDAR sensors and the quality of the point-cloud data received by the edge server.
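    A rough sketch of the allocation idea follows. The format names, bitrates, capacity, and the greedy upgrade rule are assumptions for illustration rather than the paper's scheme; they only show how higher-importance sensors end up with higher-fidelity formats under a shared capacity limit.

```python
# Illustrative importance-driven format assignment for LIDAR sensors.
FORMATS = [  # (name, bitrate in Mbps), ordered from coarsest to finest
    ("coarse", 5.0),
    ("medium", 12.0),
    ("fine", 30.0),
]

def allocate_formats(importance, capacity_mbps):
    """importance: dict sensor -> spatial-importance score (higher = more useful)."""
    choice = {s: 0 for s in importance}             # everyone starts coarse
    used = sum(FORMATS[0][1] for _ in importance)
    # Upgrade sensors in order of importance while capacity remains.
    for sensor in sorted(importance, key=importance.get, reverse=True):
        while choice[sensor] + 1 < len(FORMATS):
            extra = FORMATS[choice[sensor] + 1][1] - FORMATS[choice[sensor]][1]
            if used + extra > capacity_mbps:
                break
            choice[sensor] += 1
            used += extra
    return {s: FORMATS[i][0] for s, i in choice.items()}, used

if __name__ == "__main__":
    scores = {"lidar-north": 0.9, "lidar-south": 0.2, "lidar-east": 0.6}
    print(allocate_formats(scores, capacity_mbps=60.0))
```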

    Virtual network function placement and routing for multicast service chaining using merged paths

    This paper proposes a virtual network function placement and routing model for multicast service chaining based on merging multiple service paths (MSC-M). Multicast service chaining (MSC) is used to provide a network-virtualization-based multicast service. MSC sets up a multicast path that connects a source node and multiple destination nodes. Virtual network functions (VNFs) are placed on the path so that users at the destination nodes receive their desired services. The conventional MSC model configures multicast paths for services, each of which has the same source data and the same set of VNFs in a predefined order. In the MSC-M model, if the paths of different services carry the same data on the same link, these paths are allowed to be merged into one path at that link, which improves the utilization of network resources. The MSC-M model determines the placement of VNFs and the routes of paths so that the total cost associated with VNF placement and link usage is minimized. The MSC-M model is formulated as an integer linear programming (ILP) problem. We prove that the decision version of the VNF placement and routing problem based on the MSC-M model is NP-complete. A heuristic algorithm is introduced for cases where the ILP problem is intractable. Numerical results show that the MSC-M model reduces the total cost required to accommodate service chaining requests compared to the conventional MSC model. We also discuss directions for extending the MSC-M model to the optical domain.
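    The merging idea can be illustrated with the small sketch below: hops of different service paths that carry identical data over the same link are counted once. The path and data representation is an assumption for illustration; the paper's ILP decides placement and routing jointly rather than post-processing fixed paths.

```python
# Count link usage with and without merging hops that carry identical data.
def link_cost(paths, merge=True):
    """paths: list of per-service paths; each path is a list of (link, data_id)
    hops, where data_id identifies the content carried on that hop (the source
    data plus the functions applied so far)."""
    if merge:
        # Identical (link, data) hops across paths are carried once.
        return len({hop for path in paths for hop in path})
    return sum(len(path) for path in paths)

if __name__ == "__main__":
    # Two services share the source and the first function, so their first two
    # hops carry the same data and can be merged; they diverge afterwards.
    p1 = [(("s", "a"), "raw"), (("a", "b"), "raw+f1"), (("b", "d1"), "raw+f1+f2")]
    p2 = [(("s", "a"), "raw"), (("a", "b"), "raw+f1"), (("b", "d2"), "raw+f1+f3")]
    print(link_cost([p1, p2], merge=False), link_cost([p1, p2], merge=True))  # 6 4
```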

    Data assessment and prioritization in mobile networks for real-time prediction of spatial information using machine learning

    A new framework of data assessment and prioritization for real-time prediction of spatial information is presented. Real-time prediction of spatial information is promising for next-generation mobile networks. Recent developments in machine learning have enabled the prediction of spatial information, which will be quite useful for smart mobility services including navigation, driving assistance, and self-driving. Other key enablers for forming spatial information are image sensors in mobile devices such as smartphones and tablets and in vehicles such as cars and drones, together with real-time cognitive computing such as automatic number/license plate recognition and object recognition systems. However, since image data collected by mobile devices and vehicles must be delivered to the server in real time to extract the input data for real-time prediction, the uplink transmission speed of mobile networks is a major impediment. This paper proposes a framework of data assessment and prioritization that reduces the uplink traffic volume while maintaining the prediction accuracy of spatial information. In our framework, machine learning is used to estimate the importance of each data element and to predict spatial information under the limitation of available data. A numerical evaluation using an actual vehicle mobility dataset demonstrates the validity of the proposed framework. Two extension schemes in our framework, which use an ensemble of importance scores obtained from multiple feature selection methods, are also presented to improve its robustness against various machine learning and feature selection methods. We discuss the performance of these schemes through numerical evaluation.
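    The prioritization step can be sketched as below: each candidate data element carries an importance score (here simply passed in; the framework derives scores with machine-learning-based feature selection, and the extensions ensemble several scoring methods), and the highest-scoring elements are uploaded first until the uplink budget for the current interval is spent. Element names and sizes are made-up placeholders.

```python
# Greedy, budget-constrained selection of the most important data elements.
def prioritize(elements, budget_bytes):
    """elements: list of dicts with 'id', 'size' (bytes), and 'score' (importance)."""
    chosen, used = [], 0
    for e in sorted(elements, key=lambda e: e["score"], reverse=True):
        if used + e["size"] <= budget_bytes:
            chosen.append(e["id"])
            used += e["size"]
    return chosen, used

if __name__ == "__main__":
    elems = [
        {"id": "cam-12/frame-993", "size": 400_000, "score": 0.91},
        {"id": "cam-07/frame-993", "size": 380_000, "score": 0.35},
        {"id": "cam-03/frame-993", "size": 410_000, "score": 0.74},
    ]
    print(prioritize(elems, budget_bytes=850_000))
```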

    Virtualized Network Graph Design and Embedding Model to Minimize Provisioning Cost

    The provisioning cost of a virtualized network (VN) depends on several factors, including the numbers of virtual routers (VRs) and virtual links (VLs), their mapping onto a substrate infrastructure, and the routing of data traffic. An existing model, known as the virtual network embedding (VNE) model, determines the embedding of given VN graphs into the substrate infrastructure. When the resource allocation model of the VNE problem is adopted in a single-entity scenario, where a single entity fulfills the roles of both service provider and infrastructure provider, the costs of VNs and access paths increase. This paper proposes a model for virtualized network graph design and embedding (VNDE) in the single-entity scenario. The VNDE model determines the number of VRs and a VN graph for each request in conjunction with the embedding. The VNDE model also determines the access paths that connect customer premises and VRs. We formulate the VNDE model as an integer linear programming (ILP) problem and develop heuristic algorithms for cases where the ILP problem cannot be solved in practical time. We evaluate the performance of the VNDE model on several networks, including an actual Japanese academic backbone network. Numerical results show that the proposed model designs suitable VN graphs and embeds them according to the volume of traffic demands and the access path cost. Compared with a benchmark model based on a classic VNE approach, the proposed model reduces the provisioning cost by up to 28.7% in our examined scenarios.
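    The trade-off the VNDE model resolves (more VRs cost more per VR but shorten customers' access paths) can be illustrated with the toy facility-location-style greedy below. The distance and cost inputs are assumed, and this is only an illustration of the trade-off; the paper formulates and solves it exactly as an ILP, jointly with VN-graph design, embedding, and traffic routing.

```python
# Toy greedy: open VR locations one at a time while the total of VR cost plus
# demand-weighted access-path cost keeps decreasing.
def greedy_vr_placement(customers, sites, dist, vr_cost, path_unit_cost):
    """customers: dict name -> demand; sites: candidate substrate nodes;
    dist[c][s]: access-path length from customer c to site s."""
    def total_cost(open_sites):
        access = sum(demand * path_unit_cost * min(dist[c][s] for s in open_sites)
                     for c, demand in customers.items())
        return vr_cost * len(open_sites) + access

    open_sites, best = set(), float("inf")
    while True:
        candidate = min((s for s in sites if s not in open_sites),
                        key=lambda s: total_cost(open_sites | {s}),
                        default=None)
        if candidate is None or total_cost(open_sites | {candidate}) >= best:
            return open_sites, best
        open_sites.add(candidate)
        best = total_cost(open_sites)

if __name__ == "__main__":
    dist = {"c1": {"n1": 1, "n2": 4}, "c2": {"n1": 5, "n2": 1}}
    print(greedy_vr_placement({"c1": 10, "c2": 10}, ["n1", "n2"], dist,
                              vr_cost=20, path_unit_cost=1))
```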

    A Computation Method for Betweenness Centrality Based on Graph Compression

    This paper proposes a computation method to find the betweenness centrality of each vertex of a graph. The method compresses the original graph by removing vertices whose degree is one, and the betweenness centrality is then calculated from the compressed graph. We show that the proposed method avoids the redundant computation that the conventional method performs on graphs containing degree-one vertices, thereby reducing the computation time.
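    A minimal sketch of the compression step follows, assuming iterative removal of degree-one vertices with networkx. It illustrates only the compression itself; the paper's accounting for the shortest paths that start or end at the removed vertices is not reproduced, so plain recomputation on the compressed graph is not yet the full method.

```python
# Strip degree-1 vertices (repeatedly) and compute betweenness on the core.
import networkx as nx

def compress_degree_one(g):
    """Return a copy of g with degree-1 vertices removed, plus the removed set."""
    h = g.copy()
    removed = []
    leaves = [v for v in h if h.degree(v) == 1]
    while leaves:
        h.remove_nodes_from(leaves)
        removed.extend(leaves)
        leaves = [v for v in h if h.degree(v) == 1]
    return h, removed

if __name__ == "__main__":
    g = nx.cycle_graph(5)
    g.add_edges_from([(0, 10), (10, 11), (2, 12)])  # pendant branches
    core, removed = compress_degree_one(g)
    print(len(g), len(core), removed)               # 8 5 [11, 12, 10]
    print(nx.betweenness_centrality(core, normalized=False))
```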

    Dynamic load balancing with learning model for Sudoku solving system

    This paper proposes dynamic load balancing with a learning model for a Sudoku problem-solving system that has multiple workers and multiple solvers. The objective is to minimise the total processing time of problem solving. Our load balancing with the learning model distributes each Sudoku problem to an appropriate worker-solver pair when it is received by the system. The estimated solution time for a given number of input values, the estimated finishing time of each worker, and the idle status of each worker are used to determine the worker-solver pairs. In addition, the proposed system can estimate the waiting period for each problem. Test results show that the system achieves shorter processing time than conventional alternatives.
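    The dispatching rule suggested above can be sketched as follows: for an incoming puzzle, the estimated solve time (learned per solver as a function of the number of given values) is added to each worker's estimated finishing time, and the puzzle goes to the worker-solver pair with the earliest estimated completion. The solver names and estimation tables are made-up placeholders, not the paper's learned model.

```python
# Pick the worker-solver pair with the earliest estimated completion time.
import time

# estimated_solve_time[solver][given_values] would normally be learned from history.
ESTIMATED_SOLVE_TIME = {
    "backtracking": {17: 4.0, 25: 1.2, 30: 0.4},
    "dlx":          {17: 0.8, 25: 0.5, 30: 0.3},
}

def dispatch(given_values, worker_free_at, now=None):
    """worker_free_at: dict worker -> time at which it is estimated to be idle."""
    now = time.time() if now is None else now
    best = None
    for worker, free_at in worker_free_at.items():
        start = max(now, free_at)                   # idle workers start immediately
        for solver, table in ESTIMATED_SOLVE_TIME.items():
            est = table.get(given_values, max(table.values()))  # pessimistic fallback
            finish = start + est
            if best is None or finish < best[2]:
                best = (worker, solver, finish)
    return best

if __name__ == "__main__":
    print(dispatch(given_values=25, worker_free_at={"w1": 0.0, "w2": 3.0}, now=0.0))
```

    The returned triple also gives the estimated completion time, which corresponds to the waiting-period estimate mentioned in the abstract.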